
Hierarchy of discriminative power and complexity in learning quantum ensembles

Yao, Jian, Li, Pengtao, Chen, Xiaohui, Zhuang, Quntao

arXiv.org Machine Learning

Distance metrics are central to machine learning, yet distances between ensembles of quantum states remain poorly understood due to fundamental quantum measurement constraints. We introduce a hierarchy of integral probability metrics, termed MMD-$k$, which generalizes the maximum mean discrepancy to quantum ensembles and exhibits a strict trade-off between discriminative power and statistical efficiency as the moment order $k$ increases. For pure-state ensembles of size $N$, estimating MMD-$k$ using experimentally feasible SWAP-test-based estimators requires $Θ(N^{2-2/k})$ samples for constant $k$, and $Θ(N^3)$ samples to achieve full discriminative power at $k = N$. In contrast, the quantum Wasserstein distance attains full discriminative power with $Θ(N^2 \log N)$ samples. These results provide principled guidance for the design of loss functions in quantum machine learning, which we illustrate in the training of quantum denoising diffusion probabilistic models.
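The MMD-$k$ hierarchy itself is defined in the paper; as a rough classical illustration of the lowest-order idea, the sketch below (an assumed example, not the authors' code) computes the standard squared MMD between two pure-state ensembles using the fidelity kernel $|\langle\psi|\phi\rangle|^2$, which is exactly the overlap quantity a SWAP test estimates on hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pure_states(n_states, dim):
    """Sample n_states Haar-random pure states as complex unit vectors."""
    v = rng.normal(size=(n_states, dim)) + 1j * rng.normal(size=(n_states, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def fidelity_kernel(A, B):
    """Pairwise overlaps |<psi|phi>|^2 -- the quantity a SWAP test estimates."""
    return np.abs(A.conj() @ B.T) ** 2

def mmd_squared(A, B):
    """Biased squared-MMD estimator between two ensembles of pure states."""
    kaa = fidelity_kernel(A, A).mean()
    kbb = fidelity_kernel(B, B).mean()
    kab = fidelity_kernel(A, B).mean()
    return kaa + kbb - 2.0 * kab

ens_a = random_pure_states(50, dim=4)
ens_b = random_pure_states(50, dim=4)
print(mmd_squared(ens_a, ens_a))  # 0 for identical ensembles
print(mmd_squared(ens_a, ens_b))  # small nonnegative value
```

Here the classical simulation stands in for the SWAP-test circuit: on a device, each kernel entry would be estimated from repeated SWAP-test measurements, which is where the sample-complexity trade-off discussed in the abstract arises.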


Universality of Many-body Projected Ensemble for Learning Quantum Data Distribution

Tran, Quoc Hoan, Chinzei, Koki, Endo, Yasuhiro, Oshima, Hirotaka

arXiv.org Machine Learning

Recent advancements highlight the pivotal role of quantum machine learning (QML) [4, 13] in processing quantum data derived from quantum systems [14]. A fundamental task in QML is generating quantum data by learning the underlying distribution, which is essential for understanding quantum systems, synthesizing new samples, and advancing applications in quantum chemistry and materials science. However, extending classical generative approaches to quantum data presents significant challenges, as quantum distributions exhibit superposition, entanglement, and non-locality that classical models struggle to replicate efficiently. Quantum generative models such as quantum generative adversarial networks [24, 42] and quantum variational autoencoders [20, 38] can be used to prepare a single, fixed quantum state [21, 28, 37], but are inefficient for generating ensembles of quantum states [3] due to the need to train deep parameterized quantum circuits (PQCs). The quantum denoising diffusion probabilistic model [40] offers a promising approach that employs intermediate training steps to smoothly interpolate between the target distribution and noise, thereby enabling efficient training.
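As a classical point of reference for the interpolation idea (not the quantum model of [40]), a DDPM forward process gradually mixes data with Gaussian noise under a schedule $\bar\alpha_t$ that decays from 1 (pure data) to 0 (pure noise); the sketch below, with an assumed cosine-style schedule, illustrates this.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_noising(x0, alpha_bar):
    """Classical DDPM forward step: q(x_t | x_0) samples
    sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * eps, eps ~ N(0, I)."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Cosine-style schedule: alpha_bar decays from 1 (data) to ~0 (noise).
T = 10
alpha_bars = np.cos(np.linspace(0.0, np.pi / 2, T + 1)) ** 2

x0 = np.ones(4)
trajectory = [forward_noising(x0, ab) for ab in alpha_bars]
# trajectory[0] is exactly x0; trajectory[-1] is essentially pure noise.
```

The quantum model replaces these noisy vectors with intermediate quantum states, but the training principle is the same: each small step between neighboring distributions is easy to learn, avoiding the deep-PQC training problem mentioned above.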


Statistical Analysis of Quantum State Learning Process in Quantum Neural Networks

Neural Information Processing Systems

Quantum neural networks (QNNs) have been a promising framework for pursuing near-term quantum advantage in various fields, where many applications can be viewed as learning a quantum state that encodes useful data. As a quantum analog of probability distribution learning, quantum state learning is theoretically and practically essential in quantum machine learning. In this paper, we develop a no-go theorem for learning an unknown quantum state with QNNs, even when starting from a high-fidelity initial state. We prove that when the loss value is lower than a critical threshold, the probability of avoiding local minima vanishes exponentially with the qubit count, while growing only polynomially with the circuit depth. The curvature of local minima concentrates around the quantum Fisher information times a loss-dependent constant, which characterizes the sensitivity of the output state with respect to the parameters in QNNs. These results hold for arbitrary circuit structures and initialization strategies, and apply to both fixed ansatzes and adaptive methods. Extensive numerical simulations are performed to validate our theoretical results. Our findings place generic limits on good initial guesses and adaptive methods for improving the learnability and scalability of QNNs, and deepen the understanding of the role of prior information in QNNs.
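Since the curvature result is stated in terms of the quantum Fisher information, a minimal numerical sketch (an assumed toy example, not the paper's code) can estimate the single-parameter QFI from state overlaps via the standard relation $|\langle\psi(\theta)|\psi(\theta+\delta)\rangle| \approx 1 - F_Q\,\delta^2/8$.

```python
import numpy as np

def state(theta):
    """|psi(theta)> = exp(-i*theta*Z/2)|+>, a one-qubit toy 'circuit'."""
    plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
    phases = np.exp(-1j * theta / 2.0 * np.array([1.0, -1.0]))
    return phases * plus

def qfi_finite_diff(psi_fn, theta, delta=1e-3):
    """Estimate QFI from the fidelity drop between nearby parameter values:
    |<psi(theta)|psi(theta+delta)>| ~ 1 - F_Q * delta^2 / 8."""
    f = abs(np.vdot(psi_fn(theta), psi_fn(theta + delta)))
    return 8.0 * (1.0 - f) / delta**2

# For this generator (Z/2 acting on |+>), the exact QFI is 4*Var(Z/2) = 1.
print(qfi_finite_diff(state, 0.3))  # ~1.0
```

The same overlap-based estimate extends to multi-parameter circuits one direction at a time, which is how the parameter sensitivity that the abstract describes can be probed numerically.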